Measuring the similarity between patches in images is a fundamental building block in various tasks. Naturally, the patch size has a major impact on the matching quality and on the consequent application performance. Under the assumption that our patch database is sufficiently sampled, using large patches (e.g., 21-by-21) should be preferred over small ones (e.g., 7-by-7). However, this "dense-sampling" assumption is rarely true; in most cases large patches cannot find relevant nearby examples. This phenomenon is a consequence of the curse of dimensionality, which states that the database size should grow exponentially with the patch size to ensure proper matches. This explains the favored choice of small patch sizes in most applications.

Is there a way to keep the simplicity of working with small patches while gaining some of the benefits that large patches provide? In this work we offer such an approach. We propose to concatenate the regular content of a conventional (small) patch with a compact representation of its (large) surroundings - its context. Thus, with a minor increase in dimension (e.g., an additional 10 values appended to the patch representation), we implicitly/softly describe the information of a large patch. The additional descriptors are computed based on the self-similarity behavior of the patch's surroundings.

We show that this approach achieves better matches than conventional-size patches, without the need to increase the database size. The effectiveness of the proposed method is demonstrated on three distinct problems: (i) external natural image denoising, (ii) depth image super-resolution, and (iii) motion-compensated frame-rate up-conversion.
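As a concrete illustration of the idea - not the paper's exact implementation - the following NumPy sketch appends a compact self-similarity descriptor of a large (21-by-21) surrounding window to a small (7-by-7) patch. The specific choices here (mean-squared-error similarities, a 10-bin histogram summary, and the `weight` parameter that balances the two parts of the vector) are illustrative assumptions:

```python
import numpy as np

def context_descriptor(image, center, patch_size=7, context_size=21, n_bins=10):
    """Compact context descriptor: a short histogram of similarities between
    the central patch and the patches inside its larger surrounding window.
    This is an assumption-laden sketch of a self-similarity summary."""
    image = np.asarray(image, dtype=float)
    r, c = center
    p = patch_size // 2
    q = context_size // 2
    patch = image[r - p:r + p + 1, c - p:c + p + 1]
    # Mean-squared distance from the central patch to every patch whose
    # support fits inside the context window.
    dists = []
    for i in range(r - q + p, r + q - p + 1):
        for j in range(c - q + p, c + q - p + 1):
            neighbor = image[i - p:i + p + 1, j - p:j + p + 1]
            dists.append(np.mean((patch - neighbor) ** 2))
    # Map distances to similarities in [0, 1] and bin them into n_bins values.
    dists = np.array(dists)
    sims = np.exp(-dists / (np.mean(dists) + 1e-12))
    hist, _ = np.histogram(sims, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()

def con_patch(image, center, patch_size=7, context_size=21, n_bins=10, weight=0.1):
    """Concatenate the raw (small) patch with its compact context descriptor,
    yielding e.g. a 49 + 10 = 59-dimensional vector instead of 441 for a
    full 21-by-21 patch."""
    image = np.asarray(image, dtype=float)
    r, c = center
    p = patch_size // 2
    patch = image[r - p:r + p + 1, c - p:c + p + 1].ravel()
    ctx = context_descriptor(image, center, patch_size, context_size, n_bins)
    return np.concatenate([patch, weight * ctx])
```

Nearest-neighbor search can then run on these augmented vectors with any standard metric; the `weight` parameter softly controls how much the context influences the match.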